Search Results: "cord"

9 January 2024

Louis-Philippe Véronneau: 2023 A Musical Retrospective

I ended 2022 with a musical retrospective and very much enjoyed writing that blog post. As such, I have decided to do the same for 2023! From now on, this will probably be an annual thing :)
Albums
In 2023, I added 73 new albums to my collection, nearly 1.5 albums every week! I listed them below in the order in which I acquired them. I purchased most of these albums when I could and borrowed the rest at libraries. If you want to browse through, I added links to the album covers pointing either to websites where you can buy them or to Discogs when digital copies weren't available. Once again this year, it seems that Punk (mostly Oi!) and Metal dominate my list, mostly fueled by Angry Metal Guy and the amazing Montréal Skinhead/Punk concert scene.
Concerts
A trend I started in 2022 was to go to as many concerts of artists I like as possible. I'm happy to report I went to around 80% more concerts in 2023 than in 2022! Looking back at my list, April was quite a busy month... Here are the concerts I went to in 2023: Although metalfinder continues to work as intended, I'm very glad to have discovered that the Montréal underground scene has departed from Facebook/Instagram and adopted en masse Gancio, a FOSS community agenda that supports ActivityPub. Our local instance, askapunk.net, is pretty much all I could ask for :) That's it for 2023!

8 January 2024

Antoine Beaupré: Last year on this blog

So this blog is now celebrating its 21st birthday (or 20 if you count from zero, or 18 if you want to be pedantic), and I figured I would do this yearly thing of reviewing how that went.

Number of posts
2022 was the official 20th anniversary in any case, and that was one of my best years on record, with 46 posts, surpassed only by the noisy 2005 (62) and matching 2006 (46). 2023, in comparison, was underwhelming: a feeble 11 posts! What happened? Well, I was busy with other things, mostly away from keyboard, that I will not bore you with here... The other thing that happened is that the one-liner I used to collect stats was broken (it counted folders and other unrelated files) and wildly overestimated 2022! Turns out I didn't write that much then:
anarc.at$ ls blog | grep '^[0-9][0-9][0-9][0-9].*.md' | sed s/-.*// | sort | uniq -c | sort -n -k2
     57 2005
     43 2006
     20 2007
     20 2008
      7 2009
     13 2010
     16 2011
     11 2012
     13 2013
      5 2014
     13 2015
     18 2016
     29 2017
     27 2018
     17 2019
     18 2020
     14 2021
     28 2022
     10 2023
      1 2024
But even that is inaccurate because, in ikiwiki, I can tag any page as being featured on the blog. So we actually need to process the HTML itself because we don't have much better on hand without going through ikiwiki's internals:
anarcat@angela:anarc.at$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c
     56 2005
     42 2006
     19 2007
     18 2008
      6 2009
     12 2010
     15 2011
     10 2012
     11 2013
      3 2014
     15 2015
     32 2016
     50 2017
     37 2018
     19 2019
     19 2020
     15 2021
     28 2022
     13 2023
Which puts the top 10 years at:
$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c | sort -nr | head -10
     56 2005
     50 2017
     42 2006
     37 2018
     32 2016
     28 2022
     19 2020
     19 2019
     19 2007
     18 2008
Anyway, 2023 was certainly not a glorious year in that regard.

Visitors
In terms of visits, however, we had quite a few hits. According to GoatCounter, I had 122 300 visits in 2023! 2022, in comparison, had 89 363, so that's a rise of about 37%.

What you read
I seem to have hit the Hacker News front page at least twice. I say "seem" because it's actually pretty hard to tell what the HN front page is on any given day. I had 22k visits on 2023-03-13, in any case, yet I can't see my post on the front page that day. We do see a post of mine on 2023-09-02, all the way down there, which seems to have generated another 10k visits. In any case, here were the most popular stories for you fine visitors:
  • Framework 12th gen laptop review: 24k visits, which is surprising for a 13k-word article "without images", as some critics have complained. 15k were referred by Hacker News. Good reference and time-consuming benchmarks, slowly bit-rotting. That is, by far, my most popular article ever. A popular article in 2021 or 2022 drew around 6k to 9k visits, so that's a big one. I suspect it will keep getting traffic for a long while.
  • Calibre replacement considerations: 15k visits, most of which came without a referrer. It was actually an old article, but I suspect HN brought it back to light. I keep updating that wiki page regularly when I find new things, but I'm still using Calibre to import ebooks.
  • Hacking my Kobo Clara HD: not new, but gathering more and more hits; it had 1800 hits in its first year, 4600 hits last year, and brought 6400 visitors to the blog this year! Not directly related, but the iFixit battery replacement guide I wrote also seems to be quite popular.
Everything else was published before 2023. Replacing Smokeping with Prometheus is still around and Looking at Wayland terminal emulators makes an entry in the top five.

Where you've been
People send less and less private information when they browse the web. The share of visitors without referrers was 41% in 2021 and rose to 44% in 2023. Most of the remaining traffic comes from Google, but Hacker News is now a significant chunk, almost as big as Google. In 2021, Google represented 23% of my traffic; in 2022, it was down to 15%, so 18% is actually a rise from last year, even if it is smaller than I would usually assume.
Ratio Referrer Visits
18% Google 22 098
13% Hacker News 16 003
2% duckduckgo.com 2 640
1% community.frame.work 1 090
1% missing.csail.mit.edu 918
Note that Facebook and Twitter do not appear at all in my referrers.

Where you are
Unsurprisingly, most visits still come from the US:
Ratio Country Visits
26% United States 32 010
14% France 17 046
10% Germany 11 650
6% Canada 7 425
5% United Kingdom 6 473
3% Netherlands 3 436
Those ratios are nearly identical to last year's, but quite different from 2021, when Germany and France were more or less reversed. Back in 2021, I mentioned there was a long tail of countries with at least one visit, with 160 countries listed. I expanded that, and there are now 182 countries in that list, almost all of the 193 UN member states.

What you were
Chrome's dominance continues to expand, even among readers of this blog, gaining two percentage points from Firefox compared to 2021.
Ratio Browser Visits
49% Firefox 60 126
36% Chrome 44 052
14% Safari 17 463
1% Others N/A
It seems like, unfortunately, my Lynx and Haiku users have not visited in the past year. Trying to read those metrics is like reading tea leaves... In terms of operating systems:
Ratio OS Visits
28% Linux 34 010
23% macOS 28 728
21% Windows 26 303
17% Android 20 614
10% iOS 11 741
Again, Linux and Mac are over-represented, and Android and iOS are under-represented.

What is next
I hope to write more next year. I've been thinking about a few posts I could write for work, about how things work behind the scenes at Tor, that could be informative for many people. We run a rather old setup, but things hold up pretty well for what we throw at it, and it's worth sharing that with the world... So anyway, thanks for coming, faithful reader, and see you in 2024...

Thorsten Alteholz: My Debian Activities in December 2023

FTP master
This month I accepted 235 and rejected 13 packages. The overall number of packages that got accepted was 249. I also handled lots of RM bugs and almost stopped the increase in packages this month :-). Please be aware: if you don't want your package to be removed, take care of it and keep it in good shape!
Debian LTS
This was my hundred-fourteenth month of work for the Debian LTS initiative, started by Raphael Hertzog at Freexian, and I uploaded a number of packages during my allocated time. This month was rather calm and no unexpected things happened. The web team now automatically creates all webpages from data found in the security tracker, so I could deactivate my web-dla script again, which created the webpages from the contents of the announcement mailing list. Last but not least, I also did two weeks of frontdesk duties.
Debian ELTS
This month was the sixty-fifth ELTS month; again, I uploaded a number of packages during my allocated time. Last but not least, I also did two weeks of frontdesk duties.
Debian Printing
This month I uploaded a package to fix bugs. This work is generously funded by Freexian!
Debian Astro
This month I uploaded a package to fix bugs.
Other stuff
This month I uploaded new upstream versions of packages, did source uploads for transitions, or uploaded packages to fix one or the other issue.

31 December 2023

Petter Reinholdtsen: VLC bittorrent plugin still going strong, new upload 2.14-4

The other day I uploaded a new version of the VLC bittorrent plugin to Debian, version 2.14-4, to fix a few packaging issues. This plugin extends VLC, allowing it to stream videos directly from a bittorrent source using both torrent files and magnet links, as easily as using an HTTP or local file source. I believe such protocol support is a vital feature in VLC, allowing efficient streaming from sources such as the 11 million movies in the Internet Archive. Bittorrent is one of the most efficient content distribution protocols on the Internet, without centralised control, and should be used more. The new version is now in both Debian Unstable and Testing, as well as Ubuntu. While looking after the package, I decided to ask the VLC upstream community if there was any hope of getting Bittorrent support into the official VLC program, and was very happy to learn that someone is already working on it. I hope we can see some fruits of that labour next year, but I am not holding my breath. In the meantime we can use the plugin, which is already installed by 0.23 percent of the Debian population according to popularity-contest. It could use a new upstream release, and I hope the upstream developer soon finds time to polish it even more. It is worth noting that the plugin stores the downloaded files in ~/Downloads/vlc-bittorrent/, which can quickly fill up the user's home directory during use. Users of the plugin should keep an eye on disk usage when streaming a bittorrent source. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
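For those who want to try it, a minimal sketch; vlc-plugin-bittorrent is the plugin's package name in Debian as far as I know, and the magnet link is a placeholder to replace with a real one:
$ sudo apt install vlc vlc-plugin-bittorrent
$ vlc 'magnet:?xt=urn:btih:...'
The plugin then downloads into ~/Downloads/vlc-bittorrent/ while playing, so keep the disk usage note above in mind.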

30 December 2023

Riku Voipio: Adguard DNS, or how to reduce ads without apps/extensions

Looking at the options for blocking ads, people usually first look at browser extensions. Google's plan is to disable adblock extensions in 2024. The alternative is usually an app (on phones) or a "VPN" that does the filtering for you. All these methods are quite heavyweight and require installing software on your phone or PC. What is less known is that you can use DNS-over-TLS or DNS-over-HTTPS for ad blocking.
What is DNS-over-TLS and DNS-over-HTTPS
Since Android 9, Google has provided a setting called Private DNS. Traditional DNS is unencrypted UDP, so anyone can monitor your requests and/or return false records. With private DNS, DNS-over-TLS or DNS-over-HTTPS is used to guarantee the DNS request is sent to the server you configured. Which Google hopes is, of course, Google's own public servers. If you do so, your ISP and hotspot providers can no longer monitor, monetize and enshittify your DNS requests - only Google can do so.
Subverting private DNS for ad blocking
This is where AdGuard DNS comes in useful. By setting the AdGuard DNS server as your "private DNS" server following the instructions, you can start blocking right away. Note that on a PC you can also configure the AdGuard DNS server in the browser settings (Firefox -> Enable secure DNS and Chrome -> Use Secure DNS) instead of configuring a system-wide DNS server. Blocking via DNS, of course, limits effectiveness to ads distributed from third-party servers.
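On a Linux machine with systemd-resolved, the same idea can be sketched with a drop-in file; the address and hostname below are AdGuard's published defaults at the time of writing, so double-check their instructions before relying on them:
# /etc/systemd/resolved.conf.d/adguard.conf
[Resolve]
DNS=94.140.14.14#dns.adguard-dns.com
DNSOverTLS=yes
Then restart the service and verify the server was picked up:
$ sudo systemctl restart systemd-resolved
$ resolvectl status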
Other uses for AdGuard DNS
If you register for AdGuard DNS, you get your "own", customizable DNS server address to point to. You can, for example, create your own /etc/hosts style records that are then available on all the devices you have connected to the AdGuard DNS server - whether you are at home or not. Of course, if you choose to use the personal DNS server, your DNS query privacy is in the hands of AdGuard.
Going further
What else is ruining the web besides ads? Commercial social media. An article ("Ei näin! Algoritmiähky", roughly "Not like this! Algorithm overload") from the latest issue of the Finnish magazine SKROLLI (ad: if you read Finnish, subscribe to Skrolli!) struck a chord with me. The algorithms of social media sites are designed not to serve you, but to addict you. For example, if you stop to watch a hateful meme image, the algorithm will record "The user spent time watching this, show more of the same!". Blocking or muting doesn't help - yes, that specific hate engager will be blocked, but the dozens of similar hate pages will still be shown to you. Worse, the social media sites are being overrun by AI-generated crap. Unfortunately, the addictive nature of the algorithms works. You reload in vain, hoping this time the algorithmic god will show something your friends share. How do you cure addiction? By blocking yourself out:
Epilogue
I didn't block myself out of Fediverse - yet. It's not engineered to be addictive, which is also probably why it isn't as popular as the commercial alternatives...

Valhalla's Things: I've been influenced

Posted on December 30, 2023
Tags: madeof:atoms
A woman wearing a red sleeveless dress; from the waist up it is fitted, while the skirt flares out. There is a white border with red embroidery and black fringe at the hem and a belt of the same material at the waist. By the influencers on the famous proprietary video platform1. When I'm crafting with no power tools I tend to watch videos, and this autumn I've seen a few in a row that were making red wool dresses, at least one or two medieval kirtles. I don't remember which channels they were, and I've decided not to go back and look for them, at least for a time. A woman wearing a red shirt with wide sleeves, a short yoke, a small collar band and 3 buttons in the front. Anyway, my brain suddenly decided that I needed a red wool dress, fitted enough to give some bust support. I had already made a dress that satisfied the latter requirement, and I still had more than half of the red wool faille I had used for the Garibaldi blouse (still not blogged, but I will get to it), and this time I wanted it to be ready for this winter. While the pattern I was going to use is Victorian, it was designed for underwear, and this was designed to be outerwear, so from the very start I decided not to bother too much with any kind of historical details or techniques. A few meters of wool-imitation fringe trim rolled up; the fringe is black and is attached to a white band with a line of lozenge outlines in red and brown. I knew that I didn't have enough fabric to add a flounce to the hem, as in the cotton dress, but then I remembered that some time ago I fell for a piece of fringed trim in black, white and red. I did a quick check that the red wasn't clashing (it wasn't) and I knew I had a plan for the hem decoration. Then I spent a week finishing other projects, and the more I thought about this dress, the more I was tempted to have spiral lacing at the front rather than buttons, as a nod to the kirtle inspiration. It may end up being a bit of a hassle, but if it is too much I can always add a hidden zipper on a side seam, and only have to undo a bit of the lacing around the neckhole to wear the dress. Finally, I could start working on the dress: I cut all of the main pieces, and since the seam lines were quite curved I marked them with tailor's tacks, which I don't exactly enjoy doing or removing, but which are the only method that was guaranteed to survive while manipulating this fabric (and not leave traces afterwards). A shaped piece of red fabric with the long edges bound in navy blue bias tape and all the seamlines marked with basting thread. While cutting the front pieces I accidentally cut the high neckline instead of the one I had used on the cotton dress: I decided to go for it also on the back pieces and decide later whether I wanted to lower it. Since this is a modern dress, with no historical accuracy at all, and I have access to a serger, I decided to use some dark blue cotton voile I've had in my stash for quite some time, cut into bias strips, to bind the raw edges before sewing. This works significantly better than bought bias tape, which is a bit too stiff for this. A bigger piece of fabric with tailor's tacks for the seams and darts; at the top edge there is a strip of navy blue fabric sewn to a wide seaming allowance, with two rows of cording closest to the center front line. For the front opening, I decided to reinforce the areas where the lacing holes will be with cotton: I used some other navy blue cotton, also from the stash, and added two lines of cording to stiffen the front edge.
So I've cut the front in two pieces rather than on the fold, sewn the reinforcements to the sewing allowances in such a way that the corded edge was aligned with the center front, and then sewn the bottom of the front seam from just before the end of the reinforcements to the hem. The front opening being worked on: on one side there are hand sewn eyelets in red silk that matches the fabric; on the other side the positions for more eyelets are still marked with pins. There is also still basting to keep the folded allowance in place. The allowances are then folded back, and they are kept in place by the worked lacing holes. The cotton was pinked, while for the wool I used the selvedge of the fabric and there was no need for any finishing. Behind the opening I've added a modesty placket: I cut a strip of red wool and a strip of cotton, folded the edges of the strip of cotton to the center, added cording to the long sides, pressed the allowances of the wool towards the wrong side, and then handstitched the cotton to the wool, wrong sides facing. This was finally handstitched to one side of the sewing allowance of the center front. I've also decided to add real pockets, rather than just slits, and for some reason I decided to add them by hand after I had sewn the dress, so I left openings in the side back seams, where the slits were in the cotton dress. I've also already worn the dress, but haven't added the pockets yet, as I'm still debating about their shape. This will be fixed in the near future. Another thing that will have to be fixed is the trim situation: I like the fringe at the bottom, and I had enough to also make a belt, but this makes the top of the dress a bit empty. I can't use the same fringe tape, as it is too wide, but it would be nice to have something smaller that matches the patterned part. I think I can make something suitable with tablet weaving, but I'm not sure which materials to use, so it will have to be on hold for a while, until I decide on the supplies and have the time for making it. Another improvement I'd like to add is detached sleeves, both matching (I should still have just enough fabric) and contrasting, but first I want to learn more about real kirtle construction, and maybe start making sleeves that would be suitable also for a real kirtle. Meanwhile, I've worn it on Christmas (over my 1700s menswear shirt with big sleeves) and may wear it again tomorrow (if I bother to dress up to spend New Year's Eve at home :D )

  1. yep, that's YouTube, of course.

29 December 2023

Ulrike Uhlig: How do kids conceive the internet? - part 4

Read all parts of the series: Part 1 // Part 2 // Part 3 // Part 4. I've been wanting to write this post for over a year, but lacked energy and time. Before 2023 comes to an end, I want to close this series, share some more insights with you, and hopefully provide you with a smile here and there. For this round of interviews, four more kids around the ages of 8 to 13 were interviewed; 3 of them have a US background, and these 3 interviews were done by a friend who recorded them for me - thank you! As opposed to the previous interviews, these four kids have parents with a more technical professional background. And this seems to make a difference: even though none of these kids actually knew much better how the internet really works than the other kids I interviewed, specifically in terms of physical infrastructure, they were much more confident in using the internet, they were able to more correctly name things they see on the internet, and they had partly radical ideas about what they would like to learn or what they would want to change about the internet! Looking at these results, I think it's safe to say that social reproduction is at work and that we need to improve education for kids who do not profit from this type of social and cultural wealth at home. But let's dive into the details.

The boy and the aliens (I'll mostly be transcribing the interview, which was short, and which I find difficult to sum up because some of the questions are written in a way that encourages the kids to tell a story, and this particular kid had a thing going on with aliens.) He's a 13-year-old boy living in the US. He has his own computer, which technically belongs to his school but can be used by him freely, and he can also take it home. He's the first kid saying he reads the news on the internet; he does not actually use social media, besides sometimes watching TikTok. When asked: Imagine that aliens land and come to you and say: We've heard about this internet thing you all talk about, what is it? What do you tell them? he replied:
Well, I mean they're aliens, so I don't know if I wanna tell them much.
(Parents laughing in the background.) Let's assume they're friendly aliens.
Well, I would say you can look anything up and play different games. And there are alien games. But mostly the enemies are aliens, which you might be a little offended by. And you can get work done, if you needed to spy on humans. There's cameras, you can film yourself, yeah. And you can text people and call people who are far away...
And what would be in a drawing that would explain the internet? Google, an alien using Twitch, Google search results, and the interface of an IM software on an iPhone, drawn by a 13-year-old boy. And here's what he explains about his drawing:
First, I would draw what I see when you open a new tab, Google.
On the right side of the drawing we see something like Twitch.
I don't wanna offend the aliens, but you can film yourself playing a game, so here is the alien and he's playing a game.
And then you can ask questions like: How did aliens come to the Earth? And the answer will be here (below). And there'll be different websites that you can click on.
And you can also look up Who won the alien contest? And that would be Usmushgagu, and that guy won the alien contest.
Do you think the information about alien intergalactic football is already on the internet?
Yeah! That's how fast the internet is.
On the bottom of the drawing we see an iPhone and an instant messaging software.
There's also a device called an iPhone and with it you can text your friends. So here's the alien asking: How was ur day? and the friend might answer IDK [I don't know].
Imagine that a wise and friendly dragon could teach you one thing about the internet that you've always wanted to know. What would you ask the dragon to teach you about?
Is there a way you don't have to pay for any channels or subscriptions and you can get through any firewall?
Imagine you could make the internet better for everyone. What would you do first?
Well you wouldn t have to pay for it [paywalls].
Can you describe what happens between your device and a website when you visit a website?
Well, it takes 0.025 seconds. [...] It's connecting.
Wow, that's indeed fast! We were not able to obtain more details about what exactly that fast thing is that's happening there...

The software engineer's kid This kid identifies as neither boy nor girl, is 10 years old and lives in Germany. Their father works as a software engineer, or, in the words of the child:
My dad knows everything.
The kid has a laptop and a mobile phone, both with parental controls; they don't think that the controlling is fair. This kid uses the internet first and foremost for listening to music and watching prank channels on Youtube, but also to work with Purple Mash (a teaching platform for the computing curriculum used at their school) and to find 3D printing models (which they ask their father to print with them, because they have not yet managed to use the printer by themselves). Interestingly, and very differently from the non-tech-parent kids, this kid insists on using Firefox and Signal - the latter is not only used by their dad to tell them to come downstairs for dinner, but also to call their grandmother. This kid also shops online, with the help of the father, who does the actual shopping for them using money that the kid earned by reading books. If you would need to explain to an alien who has landed on Earth what the internet is, what would you tell them?
The internet is something where you search, for example, you can look for music. You can also watch videos from around the world, and you can program stuff.
Like most of the kids interviewed, this kid uses the internet mostly for media consumption, but with the difference that they also engage with technology by way of programming using Purple Mash. Drawing of the internet by a 10 year old showing a Youtube prank channel, an external trackpad, and headphones. In their drawing we see a Youtube prank channel on a screen, an external trackpad on the right (likely it's not a touch screen), and headphones. Notice how there is no keyboard, or maybe it's folded away. If you could ask a nice and friendly dragon anything you'd like to learn about the internet, what would it be?
How do I shut down my dad's computer forever?
And what is it that they would do to improve the internet for everyone? Contrary to the kid living in the US, they think that
It takes too much time to load stuff!
I wonder if this kid experiences the internet as slow because they use the mobile network, or because their connection somehow gets throttled as a way to control media consumption, or if the German internet infrastructure is just so much worse in certain regions. If you could improve the internet for everyone, what would you do first? I'd make a new Firefox app that loads the internet much faster.

The software engineer's daughter This girl is only 8 years old, she hates unicorns, and her dad is also a software engineer. She uses a smartphone, controlled by her parents. My impression of the interview is that at this age, kids slightly mix up the internet with the devices they use to access it. Drawing of the internet by an 8 year old girl, showing Google and the interface to call and text someone. In her drawing, we see again Google - it's clearly everywhere - and also the interfaces for calling and texting someone. To explain what the internet is, besides the fact that one can use it for calling and listening to music, she says:
[The internet] is something that you can [use to] see someone who is far away, so that you don't need to take time to get to them.
Now, that's a great explanation: the internet providing the possibility for communication over a distance :) If she could ask a friendly dragon something she always wanted to know, she'd ask how to make her phone come alive:
that it can talk to you, that it can see you, that it can smile and has eyes. It's like a new family member, you can talk to it.
Sounds a bit like Siri, Alexa, or Furby, doesn't it? If you could improve the internet for everyone, what would you do first? She'd have the phone be able to decide over her free time, her phone time. That would make the world better - not for the kids, but certainly for the parents.

The antifascist kid This German boy's dad has a background in electrotechnical engineering. He's 10 years old and he told me he uses the internet a lot for searching for things, for example about his passion: the firefighters. For him, the internet is:
An invisible world. A virtual world. But there's also the darknet.
He told me he always watches that German show on public TV for kids that explains stuff: Checker Tobi. (In 2014, Checker Tobi actually produced an episode about the internet, which I'd criticize for having only male characters, except for one female character: a secretary, Google, a nice and friendly woman guiding the way through the huge library that is the internet.) This kid was the only one interviewed who managed to actually explain something about the internet, or rather about the hypertextual structure of the web. When I asked him to draw the internet, he made a drawing of a pin board. He explained:
Many items are attached to the pin board, and in the top left corner there's a computer, for example with Youtube, and one can navigate like that between all the items, and start again from the beginning when done.
Hypertext structure representing the internet, drawn by a kid. When I asked if he knew what actually happens between the device and a website he visits, he put forth the hypothesis of the existence of some kind of
Waves, internet waves - all this stuff somehow needs to be transmitted.
What he d like to learn:
How do you get into the darknet? How do you become a Whitehat? I've heard these words on the internet; the internet makes me clever.
And what would he change on the internet if he could?
I want that right-wing extremist stuff is not accessible anymore, or at least, that it rains turds ("Kackwürste") whenever people watch such stuff. Or that people are always told: This video is scum.
I suspect that his father has been talking with him about these things, and maybe these are also subjects he heard about when listening to punk music (he told me he does), or browsing Youtube.

Future projects To me this has been pretty insightful. I might share some more internet drawings by adults in the future, which I think are also really interesting, as they show very different things depending on the age of the person. I've been using the information gathered to work on a children's book, which I hope to be able to share with you next year.

27 December 2023

Bits from Debian: Statement about the EU Cyber Resilience Act

Debian Public Statement about the EU Cyber Resilience Act and the Product Liability Directive The European Union is currently preparing a regulation "on horizontal cybersecurity requirements for products with digital elements" known as the Cyber Resilience Act (CRA). It is currently in the final "trilogue" phase of the legislative process. The act includes a set of essential cybersecurity and vulnerability handling requirements for manufacturers. It will require products to be accompanied by information and instructions for the user. Manufacturers will need to perform risk assessments and produce technical documentation and, for critical components, have third-party audits conducted. Discovered security issues will have to be reported to European authorities within 24 hours (1). The CRA will be followed up by the Product Liability Directive (PLD), which will introduce compulsory liability for software. While a lot of these regulations seem reasonable, the Debian project believes that there are grave problems for Free Software projects attached to them. Therefore, the Debian project issues the following statement:
  1. Free Software has always been a gift, freely given to society, to take and to use as seen fit, for whatever purpose. Free Software has proven to be an asset in our digital age and the proposed EU Cyber Resilience Act is going to be detrimental to it.
     a. As the Debian Social Contract states, our goal is "make the best system we can, so that free works will be widely distributed and used." Imposing requirements such as those proposed in the act makes it legally perilous for others to redistribute our work and endangers our commitment to "provide an integrated system of high-quality materials with no legal restrictions that would prevent such uses of the system". (2)
     b. Knowing whether software is commercial or not isn't feasible, neither in Debian nor in most free software projects - we don't track people's employment status or history, nor do we check who finances upstream projects (the original projects that we integrate in our operating system).
     c. If upstream projects stop making available their code for fear of being in the scope of CRA and its financial consequences, system security will actually get worse rather than better.
     d. Having to get legal advice before giving a gift to society will discourage many developers, especially those without a company or other organisation supporting them.
  2. Debian is well known for its security track record through practices of responsible disclosure and coordination with upstream developers and other Free Software projects. We aim to live up to the commitment made in the Debian Social Contract: "We will not hide problems." (3)
     a. The Free Software community has developed a fine-tuned, tried-and-tested system of responsible disclosure in case of security issues which will be overturned by the mandatory reporting to European authorities within 24 hours (Art. 11 CRA).
     b. Debian spends a lot of volunteer time on security issues, provides quick security updates and works closely together with upstream projects and in coordination with other vendors. To protect its users, Debian regularly participates in limited embargos to coordinate fixes to security issues so that all other major Linux distributions can also have a complete fix when the vulnerability is disclosed.
     c. Security issue tracking and remediation is intentionally decentralized and distributed. The reporting of security issues to ENISA and the intended propagation to other authorities and national administrations would collect all software vulnerabilities in one place. This greatly increases the risk of leaking information about vulnerabilities to threat actors, representing a threat for all the users around the world, including European citizens.
     d. Activists use Debian (e.g. through derivatives such as Tails), among other reasons, to protect themselves from authoritarian governments; handing threat actors exploits they can use for oppression is against what Debian stands for.
     e. Developers and companies will downplay security issues because a "security" issue now comes with legal implications. Less clarity on what is truly a security issue will hurt users by leaving them vulnerable.
  3. While proprietary software is developed behind closed doors, Free Software development is done in the open, transparent for everyone. To retain parity with proprietary software the open development process needs to be entirely exempt from CRA requirements, just as the development of software in private is. A "making available on the market" can only be considered after development is finished and the software is released.
  4. Even if only "commercial activities" are in the scope of CRA, the Free Software community - and, as a consequence, everybody - will lose a lot of small projects. CRA will force many small enterprises and most probably all self-employed developers out of business because they simply cannot fulfill the requirements imposed by CRA. Debian and other Linux distributions depend on their work. If accepted as it is, CRA will undermine not only an established community but also a thriving market. CRA needs an exemption for small businesses and, at the very least, solo entrepreneurs.

Information about the voting process: Debian uses the Condorcet method for voting. Simplistically, plain Condorcet's method can be stated like so: "Consider all possible two-way races between candidates. The Condorcet winner, if there is one, is the one candidate who can beat each other candidate in a two-way race with that candidate." The problem is that in complex elections, there may well be a circular relationship in which A beats B, B beats C, and C beats A. Most of the variations on Condorcet use various means of resolving the tie. Debian's variation is spelled out in the constitution, specifically A.5(3). Sources: (1) CRA proposals and links & PLD proposals and links (2) Debian Social Contract No. 2, 3, and 4 (3) Debian Constitution

22 December 2023

Gunnar Wolf: Pushing some reviews this way

Over roughly the last year and a half I have been participating as a reviewer in ACM's Computing Reviews, and have even been honored as a Featured Reviewer. Given that I have long enjoyed reading friends' reviews of their reading material (particularly, hats off to the very active Russ Allbery, who both beats all of my frequency expectations (I could never sustain the rhythm at which he reads!) and holds documented records for his >20 years as a book reader, with far more clarity and readability than I can aim for!), I decided to explicitly share my reviews via this blog, as the audience is somewhat congruent; I will also link here some reviews that were not approved for publication, clearly marking them so. I will probably work on wrangling my Jekyll site to display an (auto-)updated page and RSS feed for the reviews. In the meantime, the reviews I have published are:

20 December 2023

Melissa Wen: The Rainbow Treasure Map Talk: Advanced color management on Linux with AMD/Steam Deck.

Last week marked a major milestone for me: the AMD driver-specific color management properties reached the upstream linux-next! And to celebrate, I'm happy to share the slides and notes from my 2023 XDC talk, The Rainbow Treasure Map, along with the individual recording that just dropped last week on YouTube - talk about happy coincidences!

Steam Deck Rainbow: Treasure Map & Magic Frogs
While I may be bubbly and chatty in everyday life, the stage isn't exactly my comfort zone (hallway talks are more my speed). But the journey of developing the AMD color management properties was so full of discoveries that I simply had to share the experience. Witnessing the fantastic work of Jeremy and Joshua bring it all to life on the Steam Deck OLED was like uncovering magical ingredients and whipping up something truly enchanting. For XDC 2023, we split our Rainbow journey into two talks. My focus, The Rainbow Treasure Map, explored the new color features we added to the Linux kernel driver, diving deep into the hardware capabilities of the AMD/Steam Deck. Joshua then followed with The Rainbow Frogs and showed the breathtaking color magic released on Gamescope thanks to the power unlocked by the kernel driver's Steam Deck color properties.

Packing a Rainbow into 15 Minutes
I had so much to tell, but a half-slot talk meant crafting a concise presentation. To squeeze everything into 15 minutes (and calm my pre-talk jitters a bit!), I drafted and practiced those slides and notes countless times. So grab your map, and let's embark on the Rainbow journey together! Slide 1: The Rainbow Treasure Map - Advanced Color Management on Linux with AMD/SteamDeck Intro: Hi, I'm Melissa from Igalia and welcome to the Rainbow Treasure Map, a talk about advanced color management on Linux with AMD/SteamDeck. Slide 2: List useful links for this technical talk Useful links: First of all, if you are not used to the topic, you may find these links useful.
  1. XDC 2022 - I m not an AMD expert, but - Melissa Wen
  2. XDC 2022 - Is HDR Harder? - Harry Wentland
  3. XDC 2022 Lightning - HDR Workshop Summary - Harry Wentland
  4. Color management and HDR documentation for FOSS graphics - Pekka Paalanen et al.
  5. Cinematic Color - 2012 SIGGRAPH course notes - Jeremy Selan
  6. AMD Driver-specific Properties for Color Management on Linux (Part 1) - Melissa Wen
Slide 3: Why do we need advanced color management on Linux? Context: When we talk about colors in the graphics chain, we should keep in mind that we have a wide variety of source content colorimetry, a variety of output display devices, and also the internal processing. Users expect consistent color reproduction across all these devices. The userspace can use GPU-accelerated color management to get it. But this also requires an interface with display kernel drivers that is currently missing from the DRM/KMS framework. Slide 4: Describe our work on AMD driver-specific color properties Since April, I've been bothering the DRM community by sending patchsets from the work of Joshua and me to add driver-specific color properties to the AMD display driver. In parallel, discussions on defining a generic color management interface are still ongoing in the community. Moreover, we are still not clear about the diversity of color capabilities among hardware vendors. To bridge this gap, we defined a color pipeline for Gamescope that fits the latest versions of AMD hardware. It delivers advanced color management features for gamut mapping, HDR rendering, SDR on HDR, and HDR on SDR. Slide 5: Describe the AMD/SteamDeck - our hardware AMD/Steam Deck hardware: AMD frequently releases new GPU and APU generations. Each generation comes with a DCN version with display hardware improvements. Therefore, keep in mind that this work uses the AMD Steam Deck hardware and its kernel driver. The Steam Deck is an APU with a DCN3.01 display driver, of the DCN3 family. It's important to have this information since newer AMD DCN drivers inherit implementations from previous families, but also each generation of AMD hardware may introduce new color capabilities. Therefore I recommend familiarizing yourself with the hardware you are working on. Slide 6: Diagram with the three layers of the AMD display driver on Linux The AMD display driver in the kernel space: It consists of three layers: (1) the DRM/KMS framework, (2) the AMD Display Manager, and (3) the AMD Display Core. We extended the color interface exposed to userspace by leveraging existing DRM resources and connecting them using driver-specific functions for color property management. Slide 7: Three-layers diagram highlighting AMD Display Manager, DM - the layer that connects DC and DRM Bridging DC color capabilities and the DRM API required significant changes in the color management of the AMD Display Manager - the Linux-dependent part that connects the AMD DC interface to the DRM/KMS framework. Slide 8: Three-layers diagram highlighting AMD Display Core, DC - the shared code The AMD DC is the OS-agnostic layer. Its code is shared between platforms and DCN versions. Examining this part helps us understand the AMD color pipeline and hardware capabilities, since the machinery for hardware settings and resource management is already there. Slide 9: Diagram of the AMD Display Core Next architecture with main elements and data flow The newest architecture for AMD display hardware is the AMD Display Core Next. Slide 10: Diagram of the AMD Display Core Next where only DPP and MPC blocks are highlighted In this architecture, two blocks have the capability to manage colors:
  • Display Pipe and Plane (DPP) - for pre-blending adjustments;
  • Multiple Pipe/Plane Combined (MPC) - for post-blending color transformations.
Let's see what we have in the DRM API for pre-blending color management. Slide 11: Blank slide with no content, only a title 'Pre-blending: DRM plane' DRM plane color properties: This is the DRM color management API before blending. Nothing! Except two basic DRM plane properties: color_encoding and color_range for the input colorspace conversion, which is not covered by this work. Slide 12: Diagram with color capabilities and structures in the AMD DC layer without any DRM plane color interface (before blending), only the DRM CRTC color interface for post-blending In case you're not familiar with AMD shared code, what we need to do is basically draw a map and navigate there! We have some DRM color properties after blending, but nothing before blending yet. But much of the hardware programming was already implemented in the AMD DC layer, thanks to the shared code. Slide 13: Previous diagram with a rectangle to highlight the empty space in the DRM plane interface that will be filled by AMD plane properties Still, both the DRM interface and its connection to the shared code were missing. That's when the search begins! Slide 14: Color Pipeline Diagram with the plane color interface filled by AMD plane properties but without connections to AMD DC resources AMD driver-specific color pipeline: Looking at the color capabilities of the hardware, we arrive at this initial set of properties. The path wasn't exactly like that; we had many iterations and discoveries until we reached this pipeline. Slide 15: Color Pipeline Diagram connecting AMD plane degamma properties, LUT and TF, to AMD DC resources The Plane Degamma is our first driver-specific property before blending. It's used to linearize the color space from encoded values to light-linear values. Slide 16: Describe plane degamma properties and hardware capabilities We can use a pre-defined transfer function or a user lookup table (in short, LUT) to linearize the color space. Pre-defined transfer functions for plane degamma are hardcoded curves that go to a specific hardware block called DPP Degamma ROM. It supports the following transfer functions: sRGB EOTF, BT.709 inverse OETF, PQ EOTF, and pure power curves Gamma 2.2, Gamma 2.4 and Gamma 2.6. We also have a one-dimensional LUT. This 1D LUT has four thousand ninety-six (4096) entries, the usual 1D LUT size in the DRM/KMS. It's an array of drm_color_lut that goes to the DPP Gamma Correction block. Slide 17: Color Pipeline Diagram connecting AMD plane CTM property to AMD DC resources We also now have a color transformation matrix (CTM) for color space conversion. Slide 18: Describe plane CTM property and hardware capabilities It's a 3x4 matrix of fixed-point values that goes to the DPP Gamut Remap block. Both pre- and post-blending matrices previously went to the same color block; we worked on detaching them to clear both paths, and now each CTM goes its own way. Slide 19: Color Pipeline Diagram connecting AMD plane HDR multiplier property to AMD DC resources Next, the HDR Multiplier. The HDR Multiplier is a factor applied to the color values of an image to increase their overall brightness. Slide 20: Describe plane HDR mult property and hardware capabilities This is useful for converting images from a standard dynamic range (SDR) to a high dynamic range (HDR). As it can range beyond [0.0, 1.0], subsequent transforms need to use the PQ (HDR) transfer functions.
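As a concrete aside, here is the sRGB EOTF named among those hardcoded degamma curves, quoted from the sRGB specification rather than from the driver code; it maps each encoded channel value $C \in [0, 1]$ to linear light $L$, which is exactly the per-channel linearization the degamma stage performs before blending:

$$L = \begin{cases} \dfrac{C}{12.92} & \text{if } C \le 0.04045 \\ \left(\dfrac{C + 0.055}{1.055}\right)^{2.4} & \text{if } C > 0.04045 \end{cases}$$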
Slide 21: Color Pipeline Diagram connecting AMD plane shaper properties, LUT and TF, to AMD DC resources And we need a 3D LUT. But a 3D LUT has a limited number of entries in each dimension, so we want to use it in a colorspace that is optimized for human vision, which means a non-linear space. To deliver it, userspace may need one 1D LUT before the 3D LUT to delinearize content and another one after it to linearize content again for blending. Slide 22: Describe plane shaper properties and hardware capabilities The pre-3D-LUT curve is called the Shaper curve. Unlike the Degamma TF, there are no hardcoded curves for the shaper TF, but we can use the AMD color module in the driver to build shaper curves from pre-defined coefficients. The color module combines the TF and the user LUT values into the LUT that goes to the DPP Shaper RAM block. Slide 23: Color Pipeline Diagram connecting AMD plane 3D LUT property to AMD DC resources Finally, our rockstar, the 3D LUT. The 3D LUT is perfect for complex color transformations and adjustments between color channels. Slide 24: Describe plane 3D LUT property and hardware capabilities The 3D LUT is also more complex to manage and requires more computational resources; as a consequence, its number of entries is usually limited. To overcome this restriction, the array contains samples from the approximated function and values between samples are estimated by tetrahedral interpolation. AMD supports 17 and 9 as the size of a single dimension. Blue is the outermost dimension, red the innermost. Slide 25: Color Pipeline Diagram connecting AMD plane blend properties, LUT and TF, to AMD DC resources As mentioned, we need a post-3D-LUT curve to linearize the color space before blending. This is done by the Blend TF and LUT. Slide 26: Describe plane blend properties and hardware capabilities Similar to the shaper TF, there are no hardcoded curves for the Blend TF. The pre-defined curves are the same as in the Degamma block, but calculated by the color module. The resulting LUT goes to the DPP Blend RAM block. Slide 27: Color Pipeline Diagram with all AMD plane color properties connected to AMD DC resources and links showing the conflict between plane and CRTC degamma Now we have everything connected before blending. As a conflict between plane and CRTC Degamma was inevitable, our approach doesn't accept both being set at the same time. Slide 28: Color Pipeline Diagram connecting AMD CRTC gamma TF property to AMD DC resources We also optimized the conversion of the framebuffer to wire encoding by adding support for pre-defined CRTC Gamma TF. Slide 29: Describe CRTC gamma TF property and hardware capabilities Again, there are no hardcoded curves, and the TF and LUT are combined by the AMD color module. The same types of shaper curves are supported. The resulting LUT goes to the MPC Gamma RAM block. Slide 30: Color Pipeline Diagram with all AMD driver-specific color properties connected to AMD DC resources Finally, we arrived at the final version of the DRM/AMD driver-specific color management pipeline. With this knowledge, you're ready to better enjoy the rainbow treasure of AMD display hardware and the world of graphics computing. Slide 31: SteamDeck/Gamescope Color Pipeline Diagram with rectangles labeling each block of the pipeline with the related AMD color property With this work, Gamescope/Steam Deck embraces the color capabilities of the AMD GPU. We highlight here how we map the Gamescope color pipeline to each AMD color block. Slide 32: Final slide. Thank you! Future works: The search for the rainbow treasure is not over! The Linux DRM subsystem contains many hidden treasures from different vendors.
We want more complex color transformations and adjustments available on Linux. We also want to expose all GPU color capabilities from all hardware vendors to the Linux userspace. Thanks to Joshua and Harry for this joint work and to the Linux DRI community for all the feedback and reviews. The amazing part of this work comes in the next talk with Joshua and The Rainbow Frogs! Any questions?
References:
  1. Slides of the talk The Rainbow Treasure Map.
  2. Youtube video of the talk The Rainbow Treasure Map.
  3. Patch series for AMD driver-specific color management properties (upstream Linux 6.8v).
  4. SteamDeck/Gamescope color management pipeline
  5. XDC 2023 website.
  6. Igalia website.

13 December 2023

Melissa Wen: 15 Tips for Debugging Issues in the AMD Display Kernel Driver

A self-help guide for examining and debugging the AMD display driver within the Linux kernel/DRM subsystem. It's based on my experience as an external developer working on the driver, and is shared with the goal of helping others navigate the driver code. Acknowledgments: These tips were gathered thanks to the countless help received from AMD developers during the driver development process. The list below was obtained by examining open source code, reviewing public documentation, playing with tools, asking in public forums, and also with the help of my former GSoC mentor, Rodrigo Siqueira.

Pre-Debugging Steps: Before diving into an issue, it's crucial to perform two essential steps: 1) Check the latest changes: Ensure you're working with the latest AMD driver modifications located in the amd-staging-drm-next branch maintained by Alex Deucher. You may also find bug fixes for newer kernel versions on branches that follow the name pattern drm-fixes-<date>. 2) Examine the issue tracker: Confirm that your issue isn't already documented and addressed in the AMD display driver issue tracker. If you find a similar issue, you can team up with others and speed up the debugging process.
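For step 1, a minimal sketch of fetching that branch, assuming Alex Deucher's tree lives at gitlab.freedesktop.org/agd5f/linux (verify the current location before relying on it):
$ git remote add agd5f https://gitlab.freedesktop.org/agd5f/linux.git
$ git fetch agd5f
$ git checkout -b amd-staging agd5f/amd-staging-drm-next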

Understanding the issue: Do you really need to change this? Where should you start looking for changes? 3) Is the issue in the AMD kernel driver or in the userspace?: Identifying the source of the issue is essential regardless of the GPU vendor. Sometimes this can be challenging, so here are some helpful tips:
  • Record the screen: Capture the screen using a recording app while experiencing the issue. If the bug appears in the capture, it's likely a userspace issue, not the kernel display driver.
  • Analyze the dmesg log: Look for error messages related to the display driver in the dmesg log (a quick grep sketch follows this list). If the error message appears before the message [drm] Display Core v..., it's likely not a display driver issue. If this message doesn't appear in your log, the display driver wasn't fully loaded and you will see a notification that something went wrong here.
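A minimal way to pull those messages out of the log; the exact message text varies between kernel versions, so treat these patterns as a starting point:
$ sudo dmesg | grep -i 'Display Core'      # did the display core load, and which version?
$ sudo dmesg | grep -iE 'amdgpu|\*ERROR\*' # any DRM/amdgpu error messages?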
4) AMD Display Manager vs. AMD Display Core: The AMD display driver consists of two components:
  • Display Manager (DM): This component interacts directly with the Linux DRM infrastructure. Occasionally, issues can arise from misinterpretations of DRM properties or features. If the issue doesn't occur on other platforms with the same AMD hardware - for example, it only happens on Linux but not on Windows - it's more likely related to the AMD DM code.
  • Display Core (DC): This is the platform-agnostic part responsible for setting and programming hardware features. Modifications to the DC usually require validation on other platforms, like Windows, to avoid regressions.
5) Identify the DC HW family: Each AMD GPU has variations in its hardware architecture. Features and helpers differ between families, so determining the relevant code for your specific hardware is crucial.
  • Find GPU product information in Linux/AMD GPU documentation
  • Check the dmesg log for the Display Core version (since this commit in Linux kernel 6.3v). For example:
    • [drm] Display Core v3.2.241 initialized on DCN 2.1
    • [drm] Display Core v3.2.237 initialized on DCN 3.0.1

Investigating the relevant driver code: Keep unrelated driver code from affecting your investigation. 6) Narrow the code inspection down to one DC HW family: the relevant code resides in a directory named after the DC number. For example, the DCN 3.0.1 driver code is located at drivers/gpu/drm/amd/display/dc/dcn301. We all know that AMD's shared code is huge, and you can use these boundaries to rule out code unrelated to your issue. 7) Newer families may inherit code from older ones: you can find dcn301 using code from dcn30, dcn20, dcn10 files. It's crucial to verify which hooks and helpers your driver utilizes to investigate the right portion. You can leverage ftrace for supplemental validation (see the sketch after the capability listing below). To give an example, it was useful when I was updating DCN3 color mapping to correctly use the new post-blending color capabilities. Additionally, you can use two different HW families to compare behaviours. If you see the issue in one but not in the other, you can compare the code and understand what has changed and whether the implementation from a previous family doesn't fit the new HW resources or design well. You can also count on the help of the community on the Linux AMD issue tracker to validate your code on other hardware and/or systems. This approach helped me debug a 2-year-old issue where the cursor gamma adjustment was incorrect on DCN3 hardware but working correctly for the DCN2 family; I solved the issue in two steps, thanks to community feedback and validation. 8) Check the hardware capability screening in the driver: You can currently find a list of display hardware capabilities in the drivers/gpu/drm/amd/display/dc/dcn*/dcn*_resource.c file, more precisely in the dcn*_resource_construct() function. Using DCN301 for illustration, here is the list of its hardware caps:
	/*************************************************
	 *  Resource + asic cap harcoding                *
	 *************************************************/
	pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
	pool->base.pipe_count = pool->base.res_cap->num_timing_generator;
	pool->base.mpcc_count = pool->base.res_cap->num_timing_generator;
	dc->caps.max_downscale_ratio = 600;
	dc->caps.i2c_speed_in_khz = 100;
	dc->caps.i2c_speed_in_khz_hdcp = 5; /*1.4 w/a enabled by default*/
	dc->caps.max_cursor_size = 256;
	dc->caps.min_horizontal_blanking_period = 80;
	dc->caps.dmdata_alloc_size = 2048;
	dc->caps.max_slave_planes = 2;
	dc->caps.max_slave_yuv_planes = 2;
	dc->caps.max_slave_rgb_planes = 2;
	dc->caps.is_apu = true;
	dc->caps.post_blend_color_processing = true;
	dc->caps.force_dp_tps4_for_cp2520 = true;
	dc->caps.extended_aux_timeout_support = true;
	dc->caps.dmcub_support = true;
	/* Color pipeline capabilities */
	dc->caps.color.dpp.dcn_arch = 1;
	dc->caps.color.dpp.input_lut_shared = 0;
	dc->caps.color.dpp.icsc = 1;
	dc->caps.color.dpp.dgam_ram = 0; // must use gamma_corr
	dc->caps.color.dpp.dgam_rom_caps.srgb = 1;
	dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1;
	dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1;
	dc->caps.color.dpp.dgam_rom_caps.pq = 1;
	dc->caps.color.dpp.dgam_rom_caps.hlg = 1;
	dc->caps.color.dpp.post_csc = 1;
	dc->caps.color.dpp.gamma_corr = 1;
	dc->caps.color.dpp.dgam_rom_for_yuv = 0;
	dc->caps.color.dpp.hw_3d_lut = 1;
	dc->caps.color.dpp.ogam_ram = 1;
	// no OGAM ROM on DCN301
	dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
	dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
	dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
	dc->caps.color.dpp.ogam_rom_caps.pq = 0;
	dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
	dc->caps.color.dpp.ocsc = 0;
	dc->caps.color.mpc.gamut_remap = 1;
	dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
	dc->caps.color.mpc.ogam_ram = 1;
	dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
	dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
	dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
	dc->caps.color.mpc.ogam_rom_caps.pq = 0;
	dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
	dc->caps.color.mpc.ocsc = 1;
	dc->caps.dp_hdmi21_pcon_support = true;
	/* read VBIOS LTTPR caps */
	if (ctx->dc_bios->funcs->get_lttpr_caps) {
		enum bp_result bp_query_result;
		uint8_t is_vbios_lttpr_enable = 0;

		bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
		dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
	}

	if (ctx->dc_bios->funcs->get_lttpr_interop) {
		enum bp_result bp_query_result;
		uint8_t is_vbios_interop_enabled = 0;

		bp_query_result = ctx->dc_bios->funcs->get_lttpr_interop(ctx->dc_bios, &is_vbios_interop_enabled);
		dc->caps.vbios_lttpr_aware = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
	}
Keep in mind that the documentation of the color capabilities is available in the Linux kernel documentation.
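Returning to step 7: a minimal ftrace sketch to confirm which DC hooks actually run. This is hedged: it assumes root, debugfs mounted at /sys/kernel/debug, and a DCN 3.0-family symbol prefix (dcn30_*); adjust the glob for your hardware.

# trace only DC helpers matching the family prefix
cd /sys/kernel/debug/tracing
echo 'dcn30_*' > set_ftrace_filter
echo function > current_tracer
echo 1 > tracing_on
# reproduce the issue, then inspect which helpers fired
head -50 trace
echo 0 > tracing_on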

Understanding the development history: What has brought us to the current state?
9) Pinpoint relevant commits: Use git log and git blame to identify commits targeting the code section you're interested in (a sketch follows below).
10) Track regressions: If you're examining the amd-staging-drm-next branch, check for regressions between DC release versions. These are defined by DC_VER in the drivers/gpu/drm/amd/display/dc/dc.h file. Alternatively, find a commit with the format drm/amd/display: 3.2.221 that marks a display release; this is useful for bisecting. This information helps you understand how outdated your branch is and identify potential regressions. You can assume each DC_VER takes around one week to be bumped. Finally, check the testing log of each release in the report provided on the amd-gfx mailing list, such as this one Tested-by: Daniel Wheeler:
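A hedged git sketch of steps 9 and 10, run from a kernel tree (the paths come from the text; the release pattern is illustrative):

# step 9: commits touching the DCN 3.0.1 code
git log --oneline -- drivers/gpu/drm/amd/display/dc/dcn301/
git blame drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
# step 10: list display release commits (DC_VER bumps) for bisecting
git log --oneline --grep='drm/amd/display: 3\.2\.' -- drivers/gpu/drm/amd/display/dc/dc.h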

Reducing the inspection area: Focus on what really matters. 11) Identify involved HW blocks: This helps isolate the issue. You can find more information about DCN HW blocks in the DCN Overview documentation. In summary:
  • Plane issues are closer to HUBP and DPP.
  • Blending/Stream issues are closer to MPC, OPP and OPTC. They are related to DRM CRTC subjects.
This information was useful when debugging a hardware rotation issue where the cursor plane got clipped off in the middle of the screen. Finally, the issue was addressed by two patches:
12) Issues around bandwidth (glitches) and clocks: These may be affected by calculations done in these HW blocks and by HW-specific values. The recalculation equations are found in the DML folder. DML stands for Display Mode Library; it's in charge of all required configuration parameters supported by the hardware for multiple scenarios. See more in the AMD DC Overview kernel docs. It's a math library that optimally configures hardware to find the best balance between power efficiency and performance in a given scenario. Finding clk variables that affect device behavior may be a sign of this. It's hard for an external developer to debug this part, since it involves information from HW specs and firmware programming that we don't have access to. The best option is to provide all the relevant debugging information you have and ask AMD developers to check the values you find suspicious.
  • A trick: if you suspect the power setup is degrading performance, try setting the amount of power supplied to the GPU to the maximum and see if it affects the system behavior with this command: sudo bash -c "echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level" (a sketch for checking and restoring the level follows below).
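A small companion sketch, assuming card0 is the AMD GPU: check the current level first and restore automatic management when you are done testing.

# inspect the current DPM performance level
cat /sys/class/drm/card0/device/power_dpm_force_performance_level
# restore the default automatic management afterwards
echo auto | sudo tee /sys/class/drm/card0/device/power_dpm_force_performance_level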
I learned it when debugging glitches with hardware cursor rotation on Steam Deck. My first attempt was changing the clock calculation. In the end, Rodrigo Siqueira proposed the right solution targeting bandwidth in two steps:

Checking implicit programming and hardware limitations: Bring implicit programming to the level of consciousness and recognize hardware limitations.
13) Implicit update types: Check if the selected type for atomic update may affect your issue. The update type depends on the mode settings, since programming some modes demands more time for hardware processing. More details in the source code:
/* Surface update type is used by dc_update_surfaces_and_stream
 * The update type is determined at the very beginning of the function based
 * on parameters passed in and decides how much programming (or updating) is
 * going to be done during the call.
 *
 * UPDATE_TYPE_FAST is used for really fast updates that do not require much
 * logical calculations or hardware register programming. This update MUST be
 * ISR safe on windows. Currently fast update will only be used to flip surface
 * address.
 *
 * UPDATE_TYPE_MED is used for slower updates which require significant hw
 * re-programming however do not affect bandwidth consumption or clock
 * requirements. At present, this is the level at which front end updates
 * that do not require us to run bw_calcs happen. These are in/out transfer func
 * updates, viewport offset changes, recout size changes and pixel depth changes.
 * This update can be done at ISR, but we want to minimize how often this happens.
 *
 * UPDATE_TYPE_FULL is slow. Really slow. This requires us to recalculate our
 * bandwidth and clocks, possibly rearrange some pipes and reprogram anything front
 * end related. Any time viewport dimensions, recout dimensions, scaling ratios or
 * gamma need to be adjusted or pipe needs to be turned on (or disconnected) we do
 * a full update. This cannot be done at ISR level and should be a rare event.
 * Unless someone is stress testing mpo enter/exit, playing with colour or adjusting
 * underscan we don't expect to see this call at all.
 */
enum surface_update_type {
	UPDATE_TYPE_FAST, /* super fast, safe to execute in isr */
	UPDATE_TYPE_MED,  /* ISR safe, most of programming needed, no bw/clk change*/
	UPDATE_TYPE_FULL, /* may need to shuffle resources */
};

Using tools: Observe the current state, validate your findings, continue improvements.
14) Use AMD tools to check hardware state and driver programming: these help you understand your driver settings and check the behavior when changing those settings (a debugfs sketch follows the list below).
  • DC Visual confirmation: Check multiple planes and pipe split policy.
  • DTN logs: Check display hardware state, including rotation, size, format, underflow, blocks in use, color block values, etc.
  • UMR: Check ASIC info, register values, KMS state - links and elements (framebuffers, planes, CRTCs, connectors). Source: UMR project documentation
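A hedged debugfs sketch for the first two tools; the paths assume DRM device 0 and a kernel with the AMD debugfs entries enabled:

# overlay per-plane color markers on screen (DC visual confirmation)
echo 1 | sudo tee /sys/kernel/debug/dri/0/amdgpu_dc_visual_confirm
# dump the current display hardware state (DTN log)
sudo cat /sys/kernel/debug/dri/0/amdgpu_dm_dtn_log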
15) Use generic DRM/KMS tools:
  • IGT test tools: Use generic KMS tests or develop your own to isolate the issue in the kernel space. Compare results across different GPU vendors to understand their implementations and find potential solutions. AMD also has specific IGT tests for its GPUs that are expected to work without failures on any AMD GPU. You can check results of HW-specific tests using different display hardware families, or you can compare expected differences between the generic workflow and the AMD workflow (see the sketch after this list).
  • drm_info: This tool summarizes the current state of a display driver (capabilities, properties and formats) per element of the DRM/KMS workflow. Output can be helpful when reporting bugs.
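For instance, a sketch assuming IGT was built from source into ./build and drm_info is installed; the subtest name is a placeholder to fill in from the listing:

# list, then run, subtests of a generic KMS cursor test from IGT
sudo ./build/tests/kms_cursor_crc --list-subtests
sudo ./build/tests/kms_cursor_crc --run-subtest <subtest-name>
# summarize driver capabilities, properties and formats per KMS element
drm_info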

Don't give up! Debugging issues in the AMD display driver can be challenging, but by following these tips and leveraging available resources, you can significantly improve your chances of success. Worth mentioning: This blog post builds upon my talk, "I'm not an AMD expert, but...", presented at the 2022 XDC. It shares guidelines that helped me debug AMD display issues as an external developer of the driver. Open Source Display Driver: The Linux kernel/AMD display driver is open source, allowing you to actively contribute by addressing issues listed in the official tracker. Tackling existing issues or resolving your own can be a rewarding learning experience and a valuable contribution to the community. Additionally, the tracker serves as a valuable resource for finding similar bugs, troubleshooting tips, and suggestions from AMD developers. Finally, it's a platform for seeking help when needed. Remember, contributing to the open source community through issue resolution and collaboration is mutually beneficial for everyone involved.

12 December 2023

Raju Devidas: Nextcloud AIO install with docker-compose and nginx reverse proxy

Nextcloud is a popular self-hosted solution for file sync and share, as well as cloud apps such as document editing, chat and talk, calendar, photo gallery, etc. This guide will walk you through setting up Nextcloud AIO using Docker Compose. This blog post would not be possible without immense help from Sahil Dhiman a.k.a. sahilister. There are various ways in which the installation could be done; here are the prerequisites for our setup.

Step 1: The docker-compose file for Nextcloud AIO. The original compose.yml file is present in Nextcloud AIO's git repo here. Taking that file as a reference, we have our own compose.yml here:
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed as otherwise AIO will not work correctly
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed as otherwise the built-in backup solution will not work
      - /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don't forget to also set 'WATCHTOWER_DOCKER_SOCKET_PATH'!
    ports:
      - 8080:8080
    environment: # Is needed when using any of the options below
      # - AIO_DISABLE_BACKUP_SECTION=false # Setting this to true allows to hide the backup section in the AIO interface. See https://github.com/nextcloud/all-in-one#how-to-disable-the-backup-section
      - APACHE_PORT=32323 # Is needed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      - APACHE_IP_BINDING=127.0.0.1 # Should be set when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else) that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      # - BORG_RETENTION_POLICY=--keep-within=7d --keep-weekly=4 --keep-monthly=6 # Allows to adjust borgs retention policy. See https://github.com/nextcloud/all-in-one#how-to-adjust-borgs-retention-policy
      # - COLLABORA_SECCOMP_DISABLED=false # Setting this to true allows to disable Collabora's Seccomp feature. See https://github.com/nextcloud/all-in-one#how-to-disable-collaboras-seccomp-feature
      - NEXTCLOUD_DATADIR=/opt/docker/cloud.raju.dev/nextcloud # Allows to set the host directory for Nextcloud's datadir. Warning: do not set or adjust this value after the initial Nextcloud installation is done! See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
      # - NEXTCLOUD_MOUNT=/mnt/ # Allows the Nextcloud container to access the chosen directory on the host. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host
      # - NEXTCLOUD_UPLOAD_LIMIT=10G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud
      # - NEXTCLOUD_MAX_TIME=3600 # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud
      # - NEXTCLOUD_MEMORY_LIMIT=512M # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud
      # - NEXTCLOUD_TRUSTED_CACERTS_DIR=/path/to/my/cacerts # CA certificates in this directory will be trusted by the OS of the nextcloud container (useful e.g. for LDAPS). See https://github.com/nextcloud/all-in-one#how-to-trust-user-defined-certification-authorities-ca
      # - NEXTCLOUD_STARTUP_APPS=deck twofactor_totp tasks calendar contacts notes # Allows to modify the Nextcloud apps that are installed on starting AIO the first time. See https://github.com/nextcloud/all-in-one#how-to-change-the-nextcloud-apps-that-are-installed-on-the-first-startup
      # - NEXTCLOUD_ADDITIONAL_APKS=imagemagick # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container
      # - NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
      # - NEXTCLOUD_ENABLE_DRI_DEVICE=true # This allows to enable the /dev/dri device in the Nextcloud container. Warning: this only works if the '/dev/dri' device is present on the host! If it should not exist on your host, don't set this to true as otherwise the Nextcloud container will fail to start! See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-transcoding-for-nextcloud
      # - NEXTCLOUD_KEEP_DISABLED_APPS=false # Setting this to true will keep Nextcloud apps that are disabled in the AIO interface and not uninstall them if they should be installed. See https://github.com/nextcloud/all-in-one#how-to-keep-disabled-apps
      # - TALK_PORT=3478 # This allows to adjust the port that the talk container is using. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
      # - WATCHTOWER_DOCKER_SOCKET_PATH=/var/run/docker.sock # Needs to be specified if the docker socket on the host is not located in the default '/var/run/docker.sock'. Otherwise mastercontainer updates will fail. For macOS it needs to be '/var/run/docker.sock'
    # networks: # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
      # - nextcloud-aio # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
      # - SKIP_DOMAIN_VALIDATION=true
    # # Uncomment the following line when using SELinux
    # security_opt: ["label:disable"]
volumes: # If you want to store the data on a different drive, see https://github.com/nextcloud/all-in-one#how-to-store-the-filesinstallation-on-a-separate-drive
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer # This line is not allowed to be changed as otherwise the built-in backup solution will not work
I have not removed many of the commented options in the compose file, in case I want to use them in the future. If you want a smaller, cleaner compose file without the extra commented options, you can refer to this one:
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 8080:8080
    environment:
      - APACHE_PORT=32323
      - APACHE_IP_BINDING=127.0.0.1
      - NEXTCLOUD_DATADIR=/opt/docker/nextcloud
volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer
I am using a separate directory to store Nextcloud data. As per the Nextcloud documentation, you should use a separate partition if you want to use this feature; however, I did not have that option on my server, so I used a separate directory instead. We also use a custom port on which Nextcloud listens for operations; we have set it to 32323 above, but you can use any port in the permissible range. The 8080 port is used to set up the AIO management interface. Neither 8080 nor the APACHE_PORT needs to be open on the host machine, as we will be using a reverse proxy setup with nginx to direct requests. Once you have your preferred compose.yml file, you can start the containers using:
$ docker-compose -f compose.yml up -d 
Creating network "clouddev_default" with the default driver
Creating volume "nextcloud_aio_mastercontainer" with default driver
Creating nextcloud-aio-mastercontainer ... done
Once your containers are running, we can do the nginx setup.

Step 2: Configuring the nginx reverse proxy for our domain on the host. A reference nginx configuration for Nextcloud AIO is given in the Nextcloud git repository here. You can modify the configuration file according to your needs and setup. Here is the configuration that we are using:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    #listen [::]:80;            # comment to disable IPv6

    if ($scheme = "http") {
        return 301 https://$host$request_uri;
    }

    listen 443 ssl http2;      # for nginx versions below v1.25.1
    #listen [::]:443 ssl http2; # for nginx versions below v1.25.1 - comment to disable IPv6
    # listen 443 ssl;      # for nginx v1.25.1+
    # listen [::]:443 ssl; # for nginx v1.25.1+ - keep comment to disable IPv6
    # http2 on;                                 # uncomment to enable HTTP/2        - supported on nginx v1.25.1+
    # http3 on;                                 # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # quic_retry on;                            # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # add_header Alt-Svc 'h3=":443"; ma=86400'; # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+
    # listen 443 quic reuseport;       # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport
    # listen [::]:443 quic reuseport;  # uncomment to enable HTTP/3 / QUIC - supported on nginx v1.25.0+ - please remove "reuseport" if there is already another quic listener on port 443 with enabled reuseport - keep comment to disable IPv6

    server_name cloud.example.com;

    location / {
        proxy_pass http://127.0.0.1:32323$request_uri;

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_set_header X-Forwarded-Scheme $scheme;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;

        client_body_buffer_size 512k;
        proxy_read_timeout 86400s;
        client_max_body_size 0;

        # Websocket
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    ssl_certificate /etc/letsencrypt/live/cloud.example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/cloud.example.com/privkey.pem; # managed by Certbot
    ssl_session_timeout 1d;
    ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
    ssl_session_tickets off;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305;
    ssl_prefer_server_ciphers on;

    # Optional settings:
    # OCSP stapling
    # ssl_stapling on;
    # ssl_stapling_verify on;
    # ssl_trusted_certificate /etc/letsencrypt/live/<your-nc-domain>/chain.pem;

    # replace with the IP address of your resolver
    # resolver 127.0.0.1; # needed for OCSP stapling: e.g. use 94.140.15.15 for adguard / 1.1.1.1 for cloudflare or 8.8.8.8 for google - you can use the same nameserver as listed in your /etc/resolv.conf file
}
Please note that you need valid SSL certificates for your domain for this configuration to work. Obtaining them is beyond the scope of this article; a web search for getting SSL certificates with Let's Encrypt will turn up several resources, or I may write a separate blog post on it in the future. Once your nginx configuration is done, you can test it using:
$ sudo nginx -t 
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
and then reload nginx with
$ sudo nginx -s reload

Step 3: Setup of Nextcloud AIO from the browser. To set up Nextcloud AIO, we need to access it in a web browser at the URL domain.tld:8080; however, we do not want to open port 8080 publicly. To complete the setup, here is a neat hack from sahilister:
ssh -L 8080:127.0.0.1:8080 username@<server-ip>
This binds port 8080 of your server to port 8080 on your localhost using local port forwarding over SSH. The port forwarding only lasts for the duration of your SSH session; if the SSH session breaks, your port forwarding will too. Once you have the port forwarded, you can open the Nextcloud AIO instance in your web browser at 127.0.0.1:8080.
Nextcloud AIO install with docker-compose and nginx reverse proxy
You will get this error because you are trying to access a page on localhost over HTTPS. You can click on Advanced and then Continue to proceed to the next page; your data is encrypted for this session as we are tunnelling the port over SSH. Depending on your choice of browser, the above page might look different. Once you have proceeded, the Nextcloud AIO interface will open and look something like this:
Nextcloud AIO install with docker-compose and nginx reverse proxy: nextcloud AIO initial screen with capsicums as password
It will show an auto-generated passphrase; you need to save this passphrase and make sure not to lose it. For security purposes, I have masked the passwords with capsicums. Once you have noted down your password, you can proceed to the Nextcloud AIO login, enter your password and log in. After login you will be greeted with a screen like this.
Nextcloud AIO install with docker-compose and nginx reverse proxy
Now you can put the domain that you want to use in the Submit domain field. Once the domain check is done, you will proceed to the next step and see another screen like this:
Nextcloud AIO install with docker-compose and nginx reverse proxy
Here you can select any optional containers for the features that you might want. IMPORTANT: Please make sure to also change the time zone at the bottom of the page according to the time zone you wish to operate in.
Nextcloud AIO install with docker-compose and nginx reverse proxy
The timezone setup is also important because the database will be initialized according to the set time zone. A wrong time zone could result in a wrongly initialized database and you ending up in a startup loop for Nextcloud. I faced this issue and could only resolve it after getting help from sahilister. Once you are done changing the timezone and selecting any additional features you want, you can click on Download and start the containers. It will take some time for this process to finish; take a break, look at the farthest object in your room, and take a sip of water. Once the process has finished, you will see a page similar to the following one.
Nextcloud AIO install with docker-compose and nginx reverse proxy
Wait patiently for everything to turn green.
Nextcloud AIO install with docker-compose and nginx reverse proxy
Once all the containers have started properly, you can open the Nextcloud login interface on your configured domain. The initial login details are auto-generated, as you can see from the above screenshot. Again, you will see a password that you need to note down or save to enter the Nextcloud interface. Capsicums will not work as passwords; I have masked the auto-generated passwords using capsicums. Now you can click on the Open your Nextcloud button or go to your configured domain to access the login screen.
Nextcloud AIO install with docker-compose and nginx reverse proxy
You can use the login details from the previous step to log in to the administrator account of your Nextcloud instance. There you have it, your very own cloud!

Additional Notes:

How to properly reset the Nextcloud setup? While following the above steps, or steps from some other tutorial, you may have made a mistake and want to start everything again from scratch. The instructions are in the Nextcloud documentation here. Here is the TL;DR for a docker-compose setup. These steps will delete all data; do not use them on an existing Nextcloud setup unless you know what you are doing.
  • Stop your master container.
docker-compose -f compose.yml down -v
The above command will also remove the volume associated with the master container
  • Stop all the child containers that have been started by the master container.
docker stop nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
  • Remove all the child containers that have been started by the master container.
docker rm nextcloud-aio-apache nextcloud-aio-notify-push nextcloud-aio-nextcloud nextcloud-aio-imaginary nextcloud-aio-fulltextsearch nextcloud-aio-redis nextcloud-aio-database nextcloud-aio-talk nextcloud-aio-collabora
  • If you also wish to remove all images associated with Nextcloud, you can do it with:
docker rmi $(docker images --filter "reference=nextcloud/*" -q)
  • Remove all volumes associated with the child containers (see the sketch after this list):
docker volume rm <volume-name>
  • Remove the network associated with Nextcloud:
docker network rm nextcloud-aio
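Since the exact volume names are not listed above, here is a hedged sketch for finding and removing them, assuming the AIO volumes share the nextcloud_aio name prefix:

# list leftover AIO volumes, then remove them
docker volume ls --filter name=nextcloud_aio -q
docker volume rm $(docker volume ls --filter name=nextcloud_aio -q)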

Additional references.
  1. Nextcloud Github
  2. Nextcloud reverse proxy documentation
  3. Nextcloud Administration Guide
  4. Nextcloud User Manual
  5. Nextcloud Developer's manual

Raju Devidas: How to remove multiple docker images for a specific pattern

list of docker images using docker images
In my case, I want to remove all images which have the beta tag. To do that, use: docker rmi $(docker images --filter "reference=nextcloud/*:beta" -q) You can modify the reference part according to your use case. Ciao!

8 December 2023

Jonathan Dowland: The scourge of Electron, the nostalgia of Pidgin

For reasons I won't go into right now, I've spent some of this year working on a refurbished Lenovo Thinkpad Yoga 260. Despite it being relatively underpowered, I love almost everything about it. Unfortunately the model I bought has 8G RAM which turned out to be more limiting than I thought it would be. You can do incredible things with 8G of RAM: incredible, wondrous things. And most of my work, whether that's wrangling containers, hacking on OpenJDK, or complex Haskell projects, is manageable. Where it falls down is driving the modern scourge: Electron, and by proxy, lots of modern IM tools: Slack (urgh), Discord (where one of my main IRC social communities moved to), WhatsApp Web1 and even Signal Desktop. For that reason, I've (temporarily) looked at alternatives, and I was pleasantly surprised to find serviceable plugins for Pidgin, the stalwart Instant Messenger multiplexer. I originally used Pidgin (then called Gaim) back in the last century, at the time to talk to ICQ, MSN Messenger and AIM (all but ICQ2 long dead). It truly is an elegant weapon from a more civilized age.
Discord from within Pidgin
The plugins are3: Pidgin with all of these plugins loaded runs perfectly well and consumes fractions of the RAM that each of those Electron apps did prior. A side-effect of moving these into Pidgin (in particular Discord) is a refocussing of the content. Fewer distractions around the text. The lack of auto-link embedding, and other such things, make it a cleaner, purer experience. This made me think of the Discord community I am in (I'm really only active in one). It used to be an IRC channel of people that I met through a mutual friend. Said friend recently departed Discord, due to the signal to noise ratio being too poor, and the incessant nudge to click on links, engage, engage, engage. I wonder if the experience mediated by Pidgin would be more tolerable to them?
What my hexchat looks like
I'm still active in one IRC channel (and inactive in many more). I could consider moving IRC into Pidgin as well. At the moment, my IRC client of choice is hexchat, which (like Pidgin) is still using GTK2 for the UI. There's something perversely pleasant about that.

  1. if you go to the trouble of trying to run it as an application distinct from your web browser.
  2. I'm still somewhat surprised ICQ is still going. I might try and recover my old ID.
  3. There may or may not be similar plugins for Slack, but as I (am forced to) use that for corporate stuff, I'm steering clear of them.

5 December 2023

Dirk Eddelbuettel: RcppArmadillo 0.12.6.6.1 on CRAN: No More Deprecation

armadillo image Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language and is widely used by (currently) 1126 other packages on CRAN, downloaded 31.7 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 569 times according to Google Scholar. This release ends the practice of asking Armadillo to suppress deprecation warnings. RcppArmadillo, as noted, has a large user base. Conrad sometimes made changes without too much of a heads-up, so at times it was opportune not to bring those warnings to dozens (or maybe hundreds) of packages at CRAN. Yet we need to balance this with the demonstrable need to call out older deprecated code use. So sixteen months ago, with GitHub issue #391, we started to alert the authors of the 30+ affected packages and supplied either pull requests or emailed patches to all. Eleven months ago GitHub issue #402 was added for a second deprecation. And the time of making the switch has come. Release 0.12.6.6.1 no longer defines ARMA_IGNORE_DEPRECATED_MARKER. So among the over 1100 packages using RcppArmadillo at CRAN, a good dozen or so were flagged in the upload, but CRAN concurred and let the package migrate to CRAN. If you maintain an affected package, consider applying the patch or pull request now. A simple stop-gap measure also exists: add -DARMA_IGNORE_DEPRECATED_MARKER to src/Makevars as either PKG_CPPFLAGS or PKG_CXXFLAGS to reactivate it (a sketch follows the changelog below). But a proper code update, which is generally simple, may be better. If you are unsure, do not hesitate to get in touch. The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.6.6.1 (2023-12-03)
  • Following the extended transition in #391 and #402, this release no longer sets ARMA_IGNORE_DEPRECATED_MARKER. Maintainers of affected packages have received pull requests or patches and can set -DARMA_IGNORE_DEPRECATED_MARKER as PKG_CPPFLAGS.
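As a stop-gap sketch for an affected package (run in the package's top-level directory; assumes src/Makevars does not already set PKG_CPPFLAGS):

# reactivate the old behaviour until the code is properly updated
echo 'PKG_CPPFLAGS = -DARMA_IGNORE_DEPRECATED_MARKER' >> src/Makevars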

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

4 December 2023

Ian Jackson: Don't use apt-get source; use dgit

tl;dr: If you are a Debian user who knows git, don't work with Debian source packages. Don't use apt source, or dpkg-source. Instead, use dgit and work in git. Also, don't use: VCS links on official Debian web pages, debcheckout, or Debian's (semi-)official gitlab, Salsa. These are suitable for Debian experts only; for most people they can be beartraps. Instead, use dgit. Struggling with Debian source packages? A friend of mine recently asked for help on IRC. They're an experienced Debian administrator and user, and were trying to: make a change to a Debian package; build and install and run binary packages from it; and record that change for their future self, and their colleagues. They ended up trying to comprehend quilt. quilt is an ancient utility for managing sets of source code patches, from well before the era of modern version control. It has many strange behaviours and footguns. Debian's ancient and obsolete tarballs-and-patches source package format (which I designed the initial version of in 1993) nowadays uses quilt, at least for most packages. You don't want to deal with any of this nonsense. You don't want to learn quilt, and suffer its misbehaviours. You don't want to learn about Debian source packages and wrestle dpkg-source. Happily, you don't need to. Just use dgit. One of dgit's main objectives is to minimise the amount of Debian craziness you need to learn. dgit aims to empower you to make changes to the software you're running, conveniently and with a minimum of fuss. You can use dgit to get the source code to a Debian package, as a git tree, with dgit clone (and dgit fetch). The git tree can be made into a binary package directly. The only things you really need to know are:
  1. By default dgit fetches from Debian unstable, the main work-in-progress branch. You may want something like dgit clone PACKAGE bookworm,-security (yes, with a comma).
  2. You probably want to edit debian/changelog to make your packages have a different version number.
  3. To build binaries, run dpkg-buildpackage -uc -b.
  4. Debian package builds are often disastrously messy: builds might modify source files; and the official debian/rules clean can be inadequate, or crazy. Always commit before building, and use git clean and git reset --hard instead of running clean rules from the package. (A short example session follows this list.)
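A minimal example session tying these steps together (a sketch: the package name and local version suffix are illustrative):

# fetch the bookworm version of a package as a git tree
dgit clone hello bookworm,-security
cd hello
# make your change, then record it with a local version bump
dch --local +local 'my local fix'
git commit -a -m 'my local fix'
# build binary packages straight from the git tree
dpkg-buildpackage -uc -b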
Don't try to make a Debian source package. (Don't read the dpkg-source manual!) Instead, to preserve and share your work, use the git branch. dgit pull or dgit fetch can be used to get updates. There is a more comprehensive tutorial, with example runes, in the dgit-user(7) manpage. (There is of course complete reference documentation, but you don't need to bother reading it.) Objections. "But I don't want to learn yet another tool!" One of dgit's main goals is to save people from learning things you don't need to. It aims to be straightforward, convenient, and (so far as Debian permits) unsurprising. So: don't learn dgit. Just run it and it will be fine :-). "Shouldn't I be using official Debian git repos?" Absolutely not. Unless you are a Debian expert, these can be terrible beartraps. One possible outcome is that you might build an apparently working program but without the security patches. Yikes! I discussed this in more detail in 2021 in another blog post plugging dgit. "Gosh, is Debian really this bad?" Yes. On behalf of the Debian Project, I apologise. Debian is a very conservative institution. Change usually comes very slowly. (And when rapid or radical change has been forced through, the results haven't always been pretty, either technically or socially.) Sadly this means that sometimes much needed change can take a very long time, if it happens at all. But this tendency also provides the stability and reliability that people have come to rely on Debian for. "I'm a Debian maintainer. You tell me dgit is something totally different!" dgit is, in fact, a general bidirectional gateway between the Debian archive and git. So yes, dgit is also a tool for Debian uploaders. You should use it to do your uploads, whenever you can. It's more convenient and more reliable than git-buildpackage and dput runes, and produces better output for users. You too can start to forget how to deal with source packages! A full treatment of this is beyond the scope of this blog post.


3 December 2023

Ben Hutchings: FOSS activity in September 2023

1 December 2023

Paul Wise: FLOSS Activities November 2023

Focus: This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review
  • Debian packages: sponsored purple-discord x2
  • Debian wiki: RecentChanges for the month
  • Debian BTS usertags: changes for the month
  • Debian screenshots:
    • approved c-evo-dh-gtk2 fim fish foliate mpc123 nfoview qpwgraph scite viewnior
    • rejected hw-probe (photos), wine64 (desktop logo), phasex (artwork), qpwgraph (about dialog), fim/fish (help output), python-lunch (full desktop), ruby-full (website), ausweisapp2 (PII), pngtools (movie poster), x11vnc (web page), mount (systemd), blastem (photo), ca-certificates (tiny, Windows)

Administration
  • Debian servers: extract user data from recent wiki backups
  • Debian wiki: fix broken user account, approve accounts

Communication
  • Respond to queries from Debian users and contributors on the mailing lists and IRC.

Sponsors: The SWH work was sponsored. All other work was done on a volunteer basis.

23 November 2023

Freexian Collaborators: Debian Contributions: Preparing for Python 3.12, /usr-merge updates, invalid PEP-440 versions, and more! (by Utkarsh Gupta)

Contributing to Debian is part of Freexian's mission. This article covers the latest achievements of Freexian and their collaborators. All of this is made possible by organizations subscribing to our Long Term Support contracts and consulting services.

urllib3's old security patch, by Stefano Rivera. Stefano ran into a test-suite failure in a new Debian package (python-truststore), caused by Debian's patch to urllib3 from a decade ago, making it enable TLS verification by default (remember those days!). Some analysis confirmed that this patch isn't useful any more, and could be removed. While working on the package, Stefano investigated the scope of the urllib3 2.x transition. It looks ready to start; not many packages are affected.

Preparing for Python 3.12 in dh-python, by Stefano Rivera. We are preparing to start the Python 3.12 transition in Debian. Two of the upstream changes that are going to cause a lot of packages to break could be worked around in dh-python, so we did:
  • Distutils is no longer shipped in the Python stdlib. Packages need to Build-Depend on python3-setuptools to get a (compatibility shim) distutils. Until that happens, dh-python will Depend on setuptools.
  • A failure to find any tests to execute will now make the unittest runner exit 5, like pytest does. This was our change, intended to catch test suites that fail to be discovered automatically. It will cause many packages to fail to build, so until they explicitly skip running test suites, dh-python will ignore these failures.

/usr-merge, by Helmut Grohne. It has become clear that the planned changes to debhelper and systemd.pc cause more rc-bugs. Helmut researched these systematically and filed another stack of patches. At the time of this writing, the uploads would still cause about 40 rc-bugs. A new opt-in helper dh_movetousr has been developed and added to debhelper in trixie and unstable.

debian-printing, by Thorsten Alteholz. This month Thorsten adopted two packages, namely rlpr and lprng, and moved them to the debian-printing team. As part of this, Thorsten could close eight bugs in the BTS. Thorsten also uploaded a new upstream version of cups, which also meant that eleven bugs could be closed. As package hannah-foo2zjs still depended on the deprecated policykit-1 package, Thorsten changed the dependency list accordingly and could close one RC bug with the following upload.

Invalid PEP-440 Versions in Python Packages, by Stefano Rivera. Stefano investigated how many packages in Debian (typically Debian-native packages) recorded versions in their packaging metadata (egg-info directories) that weren't valid PEP-440 Python versions. pip is starting to enforce that all versions on the system are valid.

Miscellaneous contributions
  • distro-info-data updates in Debian, due to the new Ubuntu release, by Stefano.
  • DebConf 23 bookkeeping continues, but is winding down. Stefano still spends a little time on it.
  • Utkarsh continues to monitor and help with reimbursements.
  • Helmut continues to maintain architecture bootstrap and accidentally broke pam briefly.
  • Anton uploaded boost1.83 and started to prepare a transition to make boost1.83 the default boost version.
  • Rejuntada Debian UY 2023, a MiniDebConf that will be held in Montevideo, from 9 to 11 November, mainly organized by Santiago.

17 November 2023

Jonathan Dowland: denver luna

picture of the denver luna record on a turntable
I haven't done one of these in a while! Denver Luna is the latest single from Underworld, here on a pink 12" vinyl. The notable thing about this release was that it was preceded by an "acapella" mix, consisting of just Karl Hyde's vocals, albeit treated and layered. Personally I prefer the "main" single mix, which calls back to their biggest hits. The vinyl also features an instrumental take, which is currently unavailable in any other format. The previous single (presumably both from a forthcoming album) was and the colour red. In this crazy world we live in, this was limited to 1,000 copies. Flippers have sold 4 on eBay already, at between 55 and 75.
